Learning Coordinate Covariances via Gradients
Authors
Abstract
We introduce an algorithm that learns gradients from samples in the supervised learning framework. An error analysis is given for the convergence of the gradient estimated by the algorithm to the true gradient. The utility of the algorithm for variable selection and for determining variable covariance is illustrated on simulated data and on two gene expression datasets. For the square loss we provide an implementation that is very efficient in both memory and time.
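The abstract describes estimating gradients from samples and using them for variable selection. A minimal sketch of the general idea, assuming a simple locally weighted first-order (Taylor-difference) least-squares estimator rather than the paper's exact algorithm; all function and parameter names here are illustrative:

```python
import numpy as np

def estimate_gradients(X, y, sigma=2.0, lam=1e-2):
    """Estimate the gradient of the regression function at each sample by
    locally weighted least squares on first-order Taylor differences.
    (Illustrative stand-in for the paper's estimator, not its exact method.)"""
    n, d = X.shape
    grads = np.zeros((n, d))
    for i in range(n):
        dX = X - X[i]                                        # x_j - x_i
        dy = y - y[i]                                        # y_j - y_i
        w = np.exp(-np.sum(dX**2, axis=1) / (2 * sigma**2))  # locality weights
        A = (dX * w[:, None]).T @ dX + lam * n * np.eye(d)   # ridge-regularized normal equations
        grads[i] = np.linalg.solve(A, (dX * w[:, None]).T @ dy)
    return grads

# Variable selection: rank coordinates by the norm of their estimated partial derivatives.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)  # only coords 0 and 1 matter
G = estimate_gradients(X, y)
scores = np.linalg.norm(G, axis=0)  # large norm -> relevant variable
```

Coordinates whose estimated partial derivatives have large norm across the sample are flagged as relevant; in this simulated example the first two coordinates should dominate.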
Similar resources
Correlations in Nonequilibrium Steady States
We present the results of a detailed study of energy correlations at steady state for a 1-D model of coupled energy and matter transport. Our aim is to discover — via theoretical arguments, conjectures, and numerical simulations — how spatial covariances scale with system size, their relations to local thermodynamic quantities, and the randomizing effects of heat baths. Among our findings are t...
Visual Tracking using Learning Histogram of Oriented Gradients by SVM on Mobile Robot
The intelligence of a mobile robot is highly dependent on its vision. The main objective of an intelligent mobile robot is its ability to perform online image processing, object detection, and especially visual tracking, which is a complex task in stochastic environments. Tracking algorithms suffer from sequence challenges such as illumination variation, occlusion, and background clutter, so an a...
Learning Gradients: Predictive Models that Infer Geometry and Statistical Dependence
The problems of dimension reduction and inference of statistical dependence are addressed by the modeling framework of learning gradients. The models we propose hold for Euclidean spaces as well as the manifold setting. The central quantity in this approach is an estimate of the gradient of the regression or classification function. Two quadratic forms are constructed from gradient estimates: t...
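A quadratic form of the kind described above is often an empirical gradient outer product matrix, whose leading eigenvectors span the predictive directions. A minimal sketch, assuming analytic gradients of a known single-index target stand in for learned gradient estimates (the setup and names are illustrative, not this paper's construction):

```python
import numpy as np

# Simulate a single-index target y = sin(x . beta); its true gradient at x is
# cos(x . beta) * beta, used here as a stand-in for learned gradient estimates.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
beta = np.array([1.0, 2.0, 0.0, 0.0])
grads = np.cos(X @ beta)[:, None] * beta

gop = grads.T @ grads / len(X)        # empirical gradient outer product (d x d)
eigvals, eigvecs = np.linalg.eigh(gop)  # eigenvalues in ascending order
top = eigvecs[:, -1]                  # leading eigenvector: a predictive direction
```

Here the gradient outer product is rank one, so its leading eigenvector aligns (up to sign) with the index direction beta, illustrating dimension reduction from gradient estimates.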
Efficient Sequence Regression by Learning Linear Models in All-Subsequence Space
We present a new approach for learning a sequence regression function, i.e., a mapping from sequential observations to a numeric score. Our learning algorithm employs coordinate gradient descent and Gauss-Southwell optimization in the feature space of all subsequences. We give a tight upper bound for the coordinate-wise gradients of the squared error loss that enables efficient Gauss-Southwell sele...
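The Gauss-Southwell rule mentioned above selects, at each step, the coordinate whose partial derivative has the largest magnitude. A minimal self-contained sketch for squared error loss on a small dense design, not the paper's all-subsequence feature space; names are illustrative:

```python
import numpy as np

def gauss_southwell_lsq(X, y, steps=500):
    """Coordinate descent with Gauss-Southwell selection for 0.5*||Xw - y||^2.
    (Illustrative sketch; real implementations maintain the residual
    incrementally instead of recomputing the full gradient each step.)"""
    n, d = X.shape
    w = np.zeros(d)
    L = (X**2).sum(axis=0)           # per-coordinate curvature (Lipschitz constants)
    for _ in range(steps):
        grad = X.T @ (X @ w - y)     # full gradient of the squared error loss
        j = np.argmax(np.abs(grad))  # Gauss-Southwell: steepest coordinate
        w[j] -= grad[j] / L[j]       # exact minimization along coordinate j
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[[1, 4]] = [2.0, -3.0]         # sparse ground truth
y = X @ w_true
w_hat = gauss_southwell_lsq(X, y)
```

Because the target is sparse and updates touch one coordinate at a time, the greedy selection concentrates work on the few relevant coordinates, which is the property the paper's bound exploits for efficiency.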
Journal:
- Journal of Machine Learning Research
Volume 7, Issue
Pages -
Publication date: 2006